Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graphs and tasks at hand. In this paper, we propose a novel graph neural network that implements graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural-network-parameterized lifting structures, in which structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider graph structure and node features. We propose to perform lifting based on diffusion wavelets to alleviate the structural-information loss caused by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform, as well as the scalability of the lifting structure, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, yielding localized, efficient, and scalable wavelet-based graph filters. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder nodes according to their local topological information. We evaluate the proposed network on node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed network over existing SGNNs in terms of accuracy, efficiency, and scalability.
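The lifting construction described above (a prediction step, an update step, and soft-thresholding of the resulting wavelet coefficients) can be sketched as follows. This is only a minimal illustration, not the authors' implementation: the even/odd node partition, the `predict_net`/`update_net` networks, and the learnable threshold are stand-ins for the paper's structure-aware operators, which also use the graph adjacency.

```python
# Minimal sketch of one lifting step with soft-thresholding (illustrative only;
# the paper's structure-aware predict/update operators are more elaborate).
import torch
import torch.nn as nn


class LiftingStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Placeholder predictors; the paper's operators are structure-aware
        # and take the graph topology into account, not plain MLPs.
        self.predict_net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.update_net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.threshold = nn.Parameter(torch.tensor(0.1))  # learnable soft threshold

    def forward(self, x_even, x_odd):
        # Predict: approximate odd-set nodes from even-set nodes; the residual
        # plays the role of the wavelet (detail) coefficients.
        detail = x_odd - self.predict_net(x_even)
        # Update: smooth the even-set nodes with the detail signal, giving the
        # coarse (approximation) coefficients.
        approx = x_even + self.update_net(detail)
        # Soft-thresholding encourages sparsity in the wavelet domain.
        detail = torch.sign(detail) * torch.relu(detail.abs() - self.threshold)
        return approx, detail
```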
It is widely believed that accurate semantic segmentation requires high internal resolution combined with expensive operations (e.g., atrous convolutions), resulting in slow speed and heavy memory usage. In this paper, we question this belief and demonstrate that neither high internal resolution nor atrous convolutions are necessary. Our intuition is that although segmentation is a dense per-pixel prediction task, the semantics of each pixel usually depend on both nearby neighbors and far-away context; a more powerful multi-scale feature fusion network therefore plays a crucial role. Following this intuition, we revisit the conventional multi-scale feature space (typically capped at P5) and extend it to a much richer space, down to P9, where the smallest features are only 1/512 of the input size and thus have very large receptive fields. To process such a rich feature space, we leverage the recent BiFPN to fuse the multi-scale features. Based on these insights, we develop a simplified segmentation model, named ESeg, which has neither high internal resolution nor expensive atrous convolutions. Perhaps surprisingly, our simple method achieves better accuracy at faster speed than prior art on multiple datasets. In real-time settings, ESeg-Lite-S achieves 76.0% mIoU on Cityscapes [12] at 189 FPS, outperforming FasterSeg [9] (73.1% mIoU at 170 FPS). Our ESeg-Lite-L runs at 79 FPS and reaches 80.1% mIoU, largely closing the gap between real-time and high-performance segmentation models.
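As a rough illustration of what extending the feature space from P5 down to P9 means, the sketch below keeps downsampling the backbone's last feature map until the P9 level (stride 512), so that the deepest features cover almost the whole image. The actual model builds and fuses these levels inside a BiFPN; the `max_pool2d` downsampling here is only a stand-in.

```python
# Illustrative only: building extra pyramid levels P6-P9 by repeated
# stride-2 downsampling of P5 (stride 32), so that P9 has stride 512.
import torch
import torch.nn.functional as F

def extend_pyramid(p3, p4, p5, max_level=9):
    """p3/p4/p5: feature maps at strides 8/16/32. Returns levels P3..P{max_level}."""
    feats = {3: p3, 4: p4, 5: p5}
    for level in range(6, max_level + 1):
        # Each extra level halves the spatial size; P9 is 1/512 of the input.
        feats[level] = F.max_pool2d(feats[level - 1], kernel_size=2, stride=2)
    return feats

# Example: a 512x1024 input gives a 1x2 feature map at P9.
p3 = torch.randn(1, 64, 64, 128)
p4 = torch.randn(1, 64, 32, 64)
p5 = torch.randn(1, 64, 16, 32)
levels = extend_pyramid(p3, p4, p5)
print({k: tuple(v.shape[-2:]) for k, v in levels.items()})
```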
We present a combined scaling method, called BASIC, that achieves 85.7% top-1 zero-shot accuracy on the ImageNet ILSVRC-2012 validation set, surpassing the best published zero-shot models, CLIP and ALIGN, by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For example, on 5 test sets with natural distribution shifts, such as ImageNet-{A, R, V2, Sketch} and ObjectNet, our model achieves 83.7% average top-1 accuracy, only a small drop from its original ImageNet accuracy. To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN in three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, 4x larger than ALIGN and 16x larger than CLIP. Our largest model has 3B weights, with 3.75x more parameters and more FLOPs than ALIGN and CLIP. Our batch size is 65536, 2x larger than CLIP's and 4x larger than ALIGN's. The main challenge in scaling is the limited memory of accelerators such as GPUs and TPUs. We therefore propose a simple method of online gradient caching to overcome this limitation.
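The abstract describes online gradient caching only at a high level. The sketch below follows the generic gradient-caching recipe for contrastive losses (embed the batch chunk by chunk without gradients, differentiate the full-batch loss with respect to the cached embeddings, then re-run each chunk with gradients and inject the cached gradient), under the assumption that this matches the spirit of the method; the function names and chunk size are invented for the illustration.

```python
# Illustrative gradient-caching training step for a contrastive loss
# (a generic sketch, not the authors' implementation).
import torch

def cached_contrastive_step(image_enc, text_enc, images, texts, loss_fn, chunk=256):
    # Pass 1: embed the whole batch in chunks WITHOUT building the autograd graph.
    with torch.no_grad():
        img_emb = torch.cat([image_enc(images[i:i + chunk]) for i in range(0, len(images), chunk)])
        txt_emb = torch.cat([text_enc(texts[i:i + chunk]) for i in range(0, len(texts), chunk)])

    # Compute the full-batch loss and cache gradients w.r.t. the embeddings.
    img_emb.requires_grad_(True)
    txt_emb.requires_grad_(True)
    loss = loss_fn(img_emb, txt_emb)
    img_grad, txt_grad = torch.autograd.grad(loss, [img_emb, txt_emb])

    # Pass 2: re-encode each chunk WITH gradients and backprop the cached grads,
    # so activation memory only ever covers one chunk at a time.
    for i in range(0, len(images), chunk):
        image_enc(images[i:i + chunk]).backward(img_grad[i:i + chunk])
    for i in range(0, len(texts), chunk):
        text_enc(texts[i:i + chunk]).backward(txt_grad[i:i + chunk])
    return loss.detach()
```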
Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths from both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: Without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; When pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; Notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result.
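The "simple relative attention" that unifies depthwise convolution and self-attention amounts to adding an input-independent, translation-equivariant bias w[i-j] (the role a depthwise convolution kernel plays) to the usual content-based attention logits. The 1-D, single-head sketch below is only meant to show that equation; CoAtNet itself uses the 2-D, multi-head version.

```python
# 1-D sketch of pre-softmax relative attention: logits[i, j] = <q_i, k_j> + w[i - j].
# Illustrative only; the real model applies the 2-D, multi-head form of this idea.
import torch
import torch.nn as nn


class RelativeAttention1D(nn.Module):
    def __init__(self, dim, max_len):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5
        # One learned scalar per relative offset in [-(max_len-1), max_len-1],
        # acting like a static depthwise-convolution kernel.
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))

    def forward(self, x):                      # x: (batch, length, dim)
        n = x.shape[1]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale
        idx = torch.arange(n, device=x.device)
        offsets = idx[:, None] - idx[None, :]  # relative position i - j
        logits = logits + self.rel_bias[offsets + n - 1]
        return torch.softmax(logits, dim=-1) @ v
```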
This paper introduces EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop these models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv. Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8x smaller. Our training can be further sped up by progressively increasing the image size during training, but it often causes a drop in accuracy. To compensate for this accuracy drop, we propose an improved method of progressive learning, which adaptively adjusts regularization (e.g. data augmentation) along with image size. With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources. Code is available at https://github.com/google/automl/tree/master/efficientnetv2.
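The adaptive-regularization idea (smaller images early in training get weaker augmentation, larger images later get stronger augmentation) can be written as a simple schedule. The stage count, size range, and augmentation-magnitude range below are made-up placeholders rather than the values used in the paper.

```python
# Illustrative progressive-learning schedule (placeholder values, not the paper's).
def progressive_schedule(stage, num_stages=4,
                         min_size=128, max_size=300,
                         min_randaug=5, max_randaug=15):
    """Linearly interpolate image size and RandAugment magnitude across stages."""
    t = stage / (num_stages - 1)              # 0.0 at the first stage, 1.0 at the last
    image_size = int(min_size + t * (max_size - min_size))
    randaug_magnitude = min_randaug + t * (max_randaug - min_randaug)
    return image_size, randaug_magnitude

for s in range(4):
    print(s, progressive_schedule(s))         # image size and regularization grow together
```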
Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an opposite perspective: adversarial examples can be used to improve image recognition models if harnessed in the right manner. We propose AdvProp, an enhanced adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to our method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger. For instance, by applying AdvProp to the latest EfficientNet-B7 [41] on ImageNet, we achieve significant improvements on ImageNet (+0.7%), ImageNet-C (+6.5%), ImageNet-A (+7.0%) and Stylized-ImageNet (+4.8%). With an enhanced EfficientNet-B8, our method achieves the state-of-the-art 85.5% ImageNet top-1 accuracy without extra data. This result even surpasses the best model in [24] which is trained with 3.5B Instagram images (∼3000× more than ImageNet) and ∼9.4× more parameters. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
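The "separate auxiliary batch norm" can be sketched as a block that routes clean and adversarial mini-batches through different BatchNorm statistics while sharing all convolutional weights; at test time only the clean branch is used. This is a minimal stand-in, not the released AdvProp code.

```python
# Minimal sketch of a dual-BN block: shared conv weights, separate BN statistics
# for clean vs. adversarial batches (illustrative, not the official implementation).
import torch.nn as nn


class DualBNConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn_clean = nn.BatchNorm2d(out_ch)   # main BN: clean examples
        self.bn_adv = nn.BatchNorm2d(out_ch)     # auxiliary BN: adversarial examples
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, adversarial=False):
        # During training, the clean batch uses bn_clean and the attack-generated
        # batch uses bn_adv; inference always uses bn_clean.
        bn = self.bn_adv if adversarial else self.bn_clean
        return self.act(bn(self.conv(x)))
```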
Model efficiency has become increasingly important in computer vision. In this paper, we systematically study neural network architecture design choices for object detection and propose several key optimizations to improve efficiency. First, we propose a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multiscale feature fusion; Second, we propose a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time. Based on these optimizations and better backbones, we have developed a new family of object detectors, called EfficientDet, which consistently achieve much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with single-model and single-scale, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs, being 4x - 9x smaller and using 13x - 42x fewer FLOPs than previous detectors. Code is available at https://github.com/google/automl/tree/master/efficientdet.
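The weighted fusion inside BiFPN boils down to a "fast normalized fusion": each incoming feature gets a learnable non-negative scalar weight, and the weighted sum is normalized by the weight total. The sketch below shows only that fusion step for one node; building the full top-down/bottom-up BiFPN around it is omitted.

```python
# Fast normalized fusion used at each BiFPN node (illustrative building block only).
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        # ReLU keeps the weights non-negative; normalization keeps the sum bounded,
        # which is cheaper and more stable than a softmax over the weights.
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(wi * fi for wi, fi in zip(w, features))

# Example: fusing two same-shaped feature maps.
fuse = WeightedFusion(num_inputs=2)
out = fuse([torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)])
```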
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
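The compound coefficient scales the three dimensions together: depth grows as alpha^phi, width as beta^phi, and resolution as gamma^phi, with alpha * beta^2 * gamma^2 kept close to 2 so that total FLOPs grow roughly as 2^phi. The helper below is an illustration of that rule; the base rates used are the commonly cited values from the paper's grid search, and the rounding of the resulting resolution is a simplification.

```python
# Compound scaling: grow depth/width/resolution together from the baseline network.
# alpha, beta, gamma are the grid-searched base rates; phi is the compound coefficient.
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15, base_resolution=224):
    depth_mult = alpha ** phi        # more layers
    width_mult = beta ** phi         # more channels
    resolution = int(round(base_resolution * gamma ** phi))  # larger input images
    # FLOPs scale roughly with depth * width^2 * resolution^2 ~= 2^phi,
    # since alpha * beta**2 * gamma**2 is close to 2.
    return depth_mult, width_mult, resolution

print(compound_scale(phi=1))         # one scaling step up from the baseline
print(compound_scale(phi=3))         # three steps up: deeper, wider, higher resolution
```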
We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state-of-the-art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20% compared to MobileNetV2. MobileNetV3-Small is 6.6% more accurate compared to a MobileNetV2 model with comparable latency. MobileNetV3-Large detection is over 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 34% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.
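The LR-ASPP decoder can be sketched as a lightweight two-branch head: a 1x1 conv path on the high-level features gated by a pooled, sigmoid-activated context branch, then fused with a low-level feature via per-branch 1x1 classifiers. The version below is close in spirit to torchvision's LRASPPHead and is a simplification; the paper uses a large-kernel strided average pool where global pooling is used here.

```python
# Simplified LR-ASPP segmentation head (illustrative approximation, not the paper's exact module).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LRASPPHead(nn.Module):
    def __init__(self, low_ch, high_ch, inter_ch, num_classes):
        super().__init__()
        self.cbr = nn.Sequential(nn.Conv2d(high_ch, inter_ch, 1, bias=False),
                                 nn.BatchNorm2d(inter_ch), nn.ReLU(inplace=True))
        # Squeeze-and-excitation-like gate computed from pooled context.
        self.scale = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(high_ch, inter_ch, 1), nn.Sigmoid())
        self.low_classifier = nn.Conv2d(low_ch, num_classes, 1)
        self.high_classifier = nn.Conv2d(inter_ch, num_classes, 1)

    def forward(self, low, high):
        x = self.cbr(high) * self.scale(high)          # gated high-level features
        x = F.interpolate(x, size=low.shape[-2:], mode="bilinear", align_corners=False)
        return self.low_classifier(low) + self.high_classifier(x)
```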
Designing convolutional neural networks (CNN) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant efforts have been dedicated to design and improve mobile CNNs on all dimensions, it is very difficult to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike previous work, where latency is considered via another, often inaccurate proxy (e.g., FLOPS), our approach directly measures real-world inference latency by executing the model on mobile phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that encourages layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our MnasNet achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is 1.8× faster than MobileNetV2 [29] with 0.5% higher accuracy and 2.3× faster than NASNet [36] with 1.2% higher accuracy. Our MnasNet also achieves better mAP quality than MobileNets for COCO object detection. Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet.
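The latency-aware objective can be written as a single scalar reward that trades accuracy against measured on-device latency, of the form ACC(m) * (LAT(m) / T)^w for a target latency T and a negative exponent w. The sketch below uses the soft-constraint form of that objective; the target and exponent values are placeholders, not necessarily the ones used in the paper.

```python
# Multi-objective reward for latency-aware architecture search (soft-constraint form).
def search_reward(accuracy, latency_ms, target_ms=80.0, w=-0.07):
    """Higher accuracy raises the reward; exceeding the latency target lowers it."""
    return accuracy * (latency_ms / target_ms) ** w

# Comparing candidate models with different accuracy/latency trade-offs:
print(search_reward(accuracy=0.752, latency_ms=78))   # accurate, near the latency target
print(search_reward(accuracy=0.745, latency_ms=60))   # faster but less accurate
```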